Google DeepMind: Artificial Intelligence Worries "Preposterous"

Jason Stutman

Posted June 9, 2015

Just a warning: Today’s piece is going to be slightly scathing. I try not to sling mud too often, but this had me particularly worked up.

Google DeepMind co-founder Mustafa Suleyman came out yesterday with some ignorant — and arguably dangerous — comments regarding artificial intelligence.

Despite recent warnings about the potential dangers of AI from Bill Gates, Elon Musk, Steve Wozniak, Stephen Hawking, and many other leading names in tech, Suleyman decided to stick his neck out by telling the Wall Street Journal that these concerns are simply “preposterous.”

Indeed, the co-founder of Microsoft, the co-founder of Apple, the world’s most renowned physicist, and the man making renewable energy and affordable space travel a reality apparently have no idea what they’re talking about when it comes to technology.

Right…

Suleyman certainly has his motives for defending AI development. His start-up DeepMind — acquired by Google last year for $400 million — is actively developing advanced machine learning technologies and applications.

Concerns over AI are no doubt bad publicity for the company, and they have become prominent enough to prompt a response from Suleyman himself:

Whether it’s Terminator coming to blow us up or mad scientists looking to create quite perverted women robots, this narrative has somehow managed to dominate the entire landscape, which we find really quite remarkable.

No doubt, “killer robots” and “sex robots” have been major topics of focus… as they should be. After all, artificial intelligence is poised to become the single most powerful tool ever wielded by mankind. It only makes sense that we pay attention to safety and ethics.

Suleyman seems to disagree, though, stating:

The way we think about AI is that it’s going to be a hugely powerful tool that we control and that we direct, whose capabilities we limit, just as you do with any other tool that we have in the world around us, whether they’re washing machines or tractors. We’re building them to empower humanity and not to destroy us.

How optimistic…

And eerily similar to the words of theoretical physicist J. Robert Oppenheimer in his own defense regarding the development of the atomic bomb:

But when you come right down to it the reason that we did this job is because it was an organic necessity. If you are a scientist you cannot stop such a thing. If you are a scientist you believe that it is good to find out how the world works; that it is good to find out what the realities are; that it is good to turn over to mankind at large the greatest possible power to control the world and to deal with it according to its lights and its values.

Of course, even atomic bombs don’t have minds of their own, which is exactly why the prospects of AI are that much more threatening…

Suleyman’s decision to draw an analogy to washing machines and tractors is, quite frankly, ridiculous. As pieces of technology, these machines are fundamentally different from anything that could be considered even close to AI.

The obvious difference is that washing machines and tractors cannot learn…
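To see why that difference matters, here’s a minimal, purely illustrative sketch in Python (all names and data here are hypothetical, not drawn from anything DeepMind has built): a fixed-function tool can only ever execute the behaviors enumerated for it, while even the most trivial learning system changes its behavior based on the data it encounters.

```python
# Purely illustrative: a fixed-function "tool" vs. a system that learns.
# All names and data are hypothetical examples, not real products.

def washing_machine(cycle: str) -> str:
    """A fixed-function tool: its behavior is fully enumerated up front."""
    cycles = {"delicate": "30C, slow spin", "normal": "40C, fast spin"}
    return cycles[cycle]  # It will never do anything outside this table.

class LearningEstimator:
    """A trivial learner: its output depends on the data it has seen."""
    def __init__(self):
        self.estimate = 0.0
        self.n = 0

    def update(self, observation: float) -> None:
        # Running mean: the behavior shifts with every new observation.
        self.n += 1
        self.estimate += (observation - self.estimate) / self.n

print(washing_machine("delicate"))  # Always "30C, slow spin".

machine = LearningEstimator()
for obs in [2.0, 4.0, 6.0]:
    machine.update(obs)
print(machine.estimate)  # 4.0 -- shaped by data, not by a fixed table.
```

The point isn’t the arithmetic; it’s that the second system’s behavior wasn’t fully specified by its programmer, and that is precisely the property that makes “control” a harder question than it is for a washing machine.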

Separating the Strong from the Weak

Suleyman’s defense of AI seems, to me, to be a catch-22:

If you limit the ability of a machine to think or learn on its own, then you aren’t actually creating artificial intelligence; you’re only simulating a piece of it. However, if you create a machine with the ability to learn without restraint as we do (true artificial intelligence), then you won’t be able to fully control it… at least no more so than you can control a human being.

The disagreement, then, seems to stem from Suleyman’s rigid misinterpretation of the phrase “artificial intelligence.”

See, over time, people have thrown the term “artificial intelligence” around enough that its meaning has become diluted. We commonly refer to programs like Apple’s Siri and IBM’s Watson as AI, but the fact is, they’re not truly AI. They’re actually what’s called “weak AI,” also known as “narrow” or “applied” AI.

When tech moguls like Bill Gates and Elon Musk speak out about the dangers of AI, they’re not talking about weak AI like Siri or Watson. Instead, they’re referring to what’s called strong AI or “artificial general intelligence” (AGI) — that is, the ability of a machine to perform any intellectual task that a human can.

With that distinction in mind, Suleyman’s comments tell us one of two things: either a) DeepMind is developing weak AI and Suleyman’s response to growing AI concerns is purely self-serving, or b) DeepMind is developing strong AI and Suleyman is ignorant enough to think DeepMind can control it.

Personally, I’m inclined to believe the former scenario, but in either case, Suleyman’s comments are dangerous, to say the least. After all, they support the idea that intelligent machines are nothing to worry about.

Now, to be clear, none of this is to suggest any malicious intent or conspiracy at play. Nor is it to suggest the inevitable destruction of mankind or anything like that.

It is, however, to point out the emerging need for transparency and oversight in AI development.

Elon Musk and Bill Gates aren’t walking around holding signs saying, “The End is Near!” They’re simply saying we need to proceed with caution.

The only thing “preposterous” at this point would be not to do so.

Until next time,


Jason Stutman
